=======================
VASP6 How-to
=======================

Last updated on 6-7-2023

This page includes the following items and is based on `VASP's official documentation <https://www.vasp.at/wiki/>`_.

- How to install VASP.6.4.1 on CSUSB's JupyterHub server such as ``csusb-hpc`` (:ref:`section_installation`).
- How to run VASP (:ref:`section_running`).
- Troubleshooting and tips (:ref:`section_troubleshooting`).

.. note::

   ``VASP``, including VASP.6.4.1, is a licensed program. It can only be installed on a server whose owner holds a license for the program.

.. note::

   Some parts of this document are based on Dr. Dung Vu's work, especially the ``makefile.include.nvhpc_omp_acc`` file and the troubleshooting section.

.. _section_installation:

Installation
-------------

We provide two installation methods, one for CPU and one for GPU. ``VASP6`` ships with several build configurations: for the CPU installation we use ``makefile.include.intel_omp``, and for the GPU installation we use ``makefile.include.nvhpc_omp_acc``. The CPU installation uses oneAPI and is similar to the VASP5 installation; no changes to the ``makefile.include`` file are needed. The GPU installation uses the `NVIDIA HPC SDK <https://developer.nvidia.com/hpc-sdk>`_ and requires the edits described below.

Loading toolkits
~~~~~~~~~~~~~~~~

Intel and NVIDIA provide ``apt`` installation. To avoid installing the toolkits multiple times, simply copy them into the user's folder and keep them there. Visit the Intel oneAPI and `NVIDIA HPC SDK <https://developer.nvidia.com/hpc-sdk>`_ pages for ``apt`` installation details.

.. See :doc:`Intel oneAPI Base+HPC ` and :doc:`Nvidia HPC SDK `.

VASP.6.4.1 installation
------------------------

``VASP.6.4.1`` comes as a tarball. Assuming that you have ``vasp.6.4.1.tgz`` in your home folder, run the following commands in the terminal to untar it.

.. code-block::

   cd ~
   tar -xvf vasp.6.4.1.tgz
   cd vasp.6.4.1

Now, to build ``vasp_std``, ``vasp_gam``, and ``vasp_ncl``, execute the following lines. You may replace ``all`` with ``std``, ``gam``, or ``ncl`` to build only the ones you need.

CPU-only installation
~~~~~~~~~~~~~~~~~~~~~~

Load Intel's oneAPI and run the following lines.

.. code-block::

   source ~/intel/oneapi/setvars.sh   # this may depend on your installation method
   cp arch/makefile.include.intel_omp ./makefile.include
   make DEPS=1 -j1 all
   # make -j all   # use this to build with multiple cores

GPU installation
~~~~~~~~~~~~~~~~~

First, copy the makefile and update the following lines in it.

.. code-block::

   cp arch/makefile.include.nvhpc_omp_acc ./makefile.include

.. warning::

   Unlike the CPU-only installation, we need to modify the ``makefile.include`` file and install `FFTW3 <http://www.fftw.org/>`_ and ``rsync``.

.. code-block::

   # If the above fails, then NVROOT needs to be set manually
   NVHPC      ?= /home/jovyan/opt/nvidia/hpc_sdk    # depends on your NVIDIA HPC SDK installation
   NVVERSION   = 23.5                               # depends on your NVIDIA HPC SDK version
   NVROOT      = $(NVHPC)/Linux_x86_64/$(NVVERSION)

   ## Improves performance when using NV HPC-SDK >=21.11 and CUDA >11.2
   OFLAG_IN    = -fast -Mwarperf
   SOURCE_IN  := nonlr.o

   # FFTW (mandatory)
   FFTW_ROOT  ?= /opt/conda    # depends on your FFTW installation
   LLIBS      += -L$(FFTW_ROOT)/lib -lfftw3 -lfftw3_omp
   INCS       += -I$(FFTW_ROOT)/include

Lastly, install ``FFTW3`` using conda, and install ``rsync``. (If you used a different method to install ``FFTW``, update the variable ``FFTW_ROOT`` above, too.)

.. code-block::

   conda install --yes fftw && \
   sudo apt update && sudo apt install --yes rsync
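Before building, it may be worth checking that these prerequisites actually landed where the makefile expects them. Here is a minimal sanity-check sketch, assuming the conda-based FFTW install above (so ``FFTW_ROOT`` is ``/opt/conda``).

.. code-block::

   # the makefile links against $(FFTW_ROOT)/lib, i.e., /opt/conda/lib here
   ls /opt/conda/lib | grep fftw3
   # rsync is needed during the GPU build
   which rsync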
We are ready to build.

.. code-block::

   export NVHPC_INSTALL_DIR=/home/jovyan/.local_sdk/opt/nvidia/hpc_sdk
   export NVCOMPILERS=$NVHPC_INSTALL_DIR
   export NVARCH=`uname -s`_`uname -m`
   export NVHPC_VERSION=2023
   export NVROOT=$NVCOMPILERS/$NVARCH/$NVHPC_VERSION
   export MANPATH=$MANPATH:$NVROOT/compilers/man
   export PATH=$NVROOT/compilers/bin:$PATH
   export PATH=$NVROOT/comm_libs/mpi/bin:$PATH
   export LD_LIBRARY_PATH=$NVROOT/compilers/extras/qd/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=$NVROOT/compilers/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=$NVROOT/comm_libs/11.0/nccl/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=$NVROOT/comm_libs/mpi/lib/:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=$NVROOT/cuda/11.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=$NVROOT/math_libs/11.0/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=$NVROOT/math_libs/11.8/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=$NVROOT/math_libs/12.1/targets/x86_64-linux/lib:$LD_LIBRARY_PATH

   make DEPS=1 -j1 all
   # make -j all   # use this to build with multiple cores

.. _section_running:

Running VASP on JupyterHub
---------------------------

CPU-only job with Intel's oneAPI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

First load ``oneAPI``. Here is some sample usage.

* :samp:`mpiexec -n {numprocs} {vasp}`
* :samp:`mpiexec -n {numprocs} -genv OMP_NUM_THREADS={numthreads} {vasp}`

.. describe:: vasp

   The VASP binary file, e.g., ``vasp_std``.

.. cmdoption:: -n {numprocs}

   The number of processes. E.g., ``-n 16`` will run VASP with ``16`` processes.

.. cmdoption:: -genv OMP_NUM_THREADS={numthreads}

   It sets the number of threads per process. The total number of cores used follows the formula ``numprocs x numthreads``.

.. seealso::

   Check out VASP's documentation, https://www.vasp.at/wiki/index.php/Combining_MPI_and_OpenMP, for details.

.. warning::

   The default ``OMP_NUM_THREADS`` may not be 1, which can lead to an unexpected number of cores being used. The first few lines of VASP's output display the number of threads used.

   .. code::

      running 24 mpi-ranks, with 4 threads/rank, on 1 nodes
      distrk: each k-point on 24 cores, 1 groups
      distr: one band on 1 cores, 24 groups
      vasp.6.4.1 05Apr23 (build Jun 03 2023 03:01:26) complex

NVIDIA GPU job with NVIDIA's HPC SDK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

First load the ``NVIDIA HPC SDK``. Here is some sample usage. (The SDK's bundled MPI is Open MPI, whose ``mpiexec`` forwards environment variables with ``-x`` rather than ``-genv``.)

* :samp:`mpiexec -n {numprocs} {vasp}`
* :samp:`mpiexec -n {numprocs} -x OMP_NUM_THREADS={numthreads} {vasp}`

.. describe:: vasp

   The VASP binary file, e.g., ``vasp_std``.

.. cmdoption:: -n {numprocs}

   The number of GPUs. E.g., ``-n 2`` will run VASP with ``2`` processes, one for each GPU.

.. caution::

   VASP suggests having one process per GPU.

.. cmdoption:: -x OMP_NUM_THREADS={numthreads}

   It sets the number of threads per process. The total number of cores used follows the formula ``numprocs x numthreads``.

.. seealso::

   Check out VASP's documentation, https://www.vasp.at/wiki/index.php/Combining_MPI_and_OpenMP, for details.

.. warning::

   The default ``OMP_NUM_THREADS`` may not be 1, which can lead to an unexpected number of cores being used. The first few lines of VASP's output display the number of threads used.

   .. code::

      running 24 mpi-ranks, with 4 threads/rank, on 1 nodes
      distrk: each k-point on 24 cores, 1 groups
      distr: one band on 1 cores, 24 groups
      vasp.6.4.1 05Apr23 (build Jun 03 2023 03:01:26) complex
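As a concrete example, here is a minimal sketch of a two-GPU run with four threads per rank. The paths are assumptions based on the installation steps above (binaries built into ``~/vasp.6.4.1/bin``), and ``my_calculation`` is a hypothetical folder containing the usual ``INCAR``, ``POSCAR``, ``KPOINTS``, and ``POTCAR`` input files. A CPU-only run looks the same, except that you source oneAPI first and use ``-genv`` instead of ``-x``.

.. code-block::

   cd ~/my_calculation    # hypothetical folder holding the VASP input files
   ulimit -s unlimited    # avoids the SIGSEGV issue listed in the troubleshooting below
   mpiexec -n 2 -x OMP_NUM_THREADS=4 ~/vasp.6.4.1/bin/vasp_std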
.. _section_troubleshooting:

Troubleshooting
-----------------

.. note::

   1. ``forrtl: severe (174): SIGSEGV, segmentation fault occurred``: run the following command before you start VASP.

      .. code-block::

         ulimit -s unlimited

   2. ``Accelerator Fatal Error: call to cuMemAlloc returned error 2: Out of memory``: the GPU ran out of memory.

   3. ``mpirun noticed that process rank 0 with PID 0 on node jupyter-vasp6-2dnv-2damd exited on signal 4 (Illegal instruction)``: Intel vs. AMD CPUs. Use ``vasp6_std_nv_amd`` for AMD CPUs (run ``lscpu | grep "Vendor ID"`` to check the CPU type).

   4. ``FIO/stdio: Disk quota exceeded``: your storage allocation needs to be increased. Contact the admin.

   5. Character errors between Windows and Linux: run ``dos2unix <file>`` on the affected input files.

Miscellaneous
---------------

When running VASP on JupyterHub, it may be useful to have a way to share your workspace with your collaborators (``jupyterlab-link-share``) and monitor system resources (``jupyter-resource-usage``). The following command will add both packages.

.. code-block::

   pip install --user jupyterlab-link-share jupyter-resource-usage
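If the extensions do not show up in the JupyterLab interface, you can check whether they were registered; the exact output format may vary with your JupyterLab version.

.. code-block::

   # list the installed JupyterLab extensions; both packages should appear
   jupyter labextension list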